Owing to advances in artificial intelligence, speaker identification (SI) technology has made great strides and is now widely used in a variety of fields. One of the most important components of SI is feature extraction, which has a significant impact on the SI process and its performance. Consequently, numerous feature extraction strategies have been thoroughly investigated, compared, and analyzed. This paper exploits five distinct feature extraction methods for speaker identification from disguised voices in an emotional environment. To evaluate this work rigorously, three effects are used: high-pitched, low-pitched, and Electronic Voice Conversion (EVC). Experimental results report that the cascade of Mel-Frequency Cepstral Coefficients (MFCCs), MFCCs-delta, and MFCCs-delta-delta is the best feature extraction method.
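As a concrete illustration of the cascaded feature set named above, the following sketch computes delta and delta-delta coefficients from a static MFCC matrix and concatenates all three; the regression window N=2 and the toy input values are assumptions, not the paper's settings.

```python
# Sketch: delta and delta-delta coefficients from an MFCC matrix,
# concatenated into the cascaded MFCCs / MFCCs-delta / MFCCs-delta-delta
# feature set. Window size N=2 is an illustrative assumption.

def deltas(frames, N=2):
    """First-order regression deltas over a list of per-frame feature vectors."""
    T = len(frames)
    denom = 2 * sum(n * n for n in range(1, N + 1))
    out = []
    for t in range(T):
        vec = []
        for k in range(len(frames[0])):
            num = sum(n * (frames[min(t + n, T - 1)][k] - frames[max(t - n, 0)][k])
                      for n in range(1, N + 1))  # edges replicate boundary frames
            vec.append(num / denom)
        out.append(vec)
    return out

def mfcc_delta_delta(mfcc):
    """Concatenate static MFCCs with their delta and delta-delta coefficients."""
    d1 = deltas(mfcc)
    d2 = deltas(d1)
    return [m + a + b for m, a, b in zip(mfcc, d1, d2)]

mfcc = [[1.0, 0.5], [2.0, 0.0], [3.0, -0.5], [4.0, -1.0]]  # 4 frames, 2 coeffs
feats = mfcc_delta_delta(mfcc)
print(len(feats), len(feats[0]))  # 4 frames, 6 features each
```

In practice the static MFCC matrix would come from a speech front end (e.g. `librosa.feature.mfcc`); the concatenation step itself is exactly as shown.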
translated by 谷歌翻译
Recent analyses of speech emotion recognition (SER) have made considerable progress using MFCC spectrogram features and neural network methods such as convolutional neural networks (CNNs). Capsule networks (CapsNets) are an alternative to CNNs thanks to their hierarchical representations with larger capacity. To address these issues, this study introduces a novel architecture for text-independent and speaker-independent SER, in which a dual-channel long short-term memory compressed capsule network (DC-LSTM CompsNet) algorithm is proposed based on the structural features of CapsNet. Our proposed novel classifier ensures the model's energy efficiency and an adequate compression method in speech emotion recognition, which are not delivered by the original structure of CapsNet. Moreover, a grid search approach is used to obtain the optimal solution. The results witness improved performance and reduced training and testing runtimes. The speech datasets used to evaluate our algorithm are: the Arabic Emirati-accented corpus, the English Speech Under Simulated and Actual Stress corpus, the English Ryerson Audio-Visual Database of Emotional Speech and Song corpus, and the crowd-sourced emotional multimodal actors dataset. This work reveals that the best feature extraction method, compared with other known methods, is MFCCs delta-delta. Using the four datasets and MFCCs delta-delta, DC-LSTM CompsNet surpasses all state-of-the-art systems, classical classifiers, CNNs, and the original CapsNet. Our results show that the proposed CapsNet-based work yields an average emotion recognition accuracy of 89.3%, which is superior to CNNs, support vector machines, multilayer perceptrons, k-nearest neighbors, radial basis function networks, and naive Bayes.
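The grid search mentioned above can be sketched as an exhaustive sweep over hyperparameter combinations; the parameter names (`lstm_units`, `dropout`) and the scoring function here are illustrative assumptions, not the paper's actual search space.

```python
# Minimal grid-search sketch for hyperparameter selection. Parameter
# names and the toy scoring function are illustrative assumptions.
from itertools import product

def grid_search(param_grid, score_fn):
    """Return the best-scoring combination from a dict of candidate values."""
    names = list(param_grid)
    best, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        cfg = dict(zip(names, values))
        s = score_fn(cfg)
        if s > best_score:
            best, best_score = cfg, s
    return best, best_score

# Toy objective standing in for a validation-accuracy measurement.
toy_score = lambda cfg: -abs(cfg["lstm_units"] - 128) - 10 * abs(cfg["dropout"] - 0.3)
grid = {"lstm_units": [64, 128, 256], "dropout": [0.1, 0.3, 0.5]}
best, score = grid_search(grid, toy_score)
print(best)  # {'lstm_units': 128, 'dropout': 0.3}
```

Real searches would substitute a cross-validated training run for `toy_score`; the enumeration logic is unchanged.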
Authentication systems are vulnerable to model inversion attacks, in which an adversary is able to approximate the inverse of a target machine learning model. Biometric models are prime candidates for such attacks, because inverting a biometric model allows the attacker to produce realistic biometric inputs that can spoof biometric authentication systems. One of the main constraints in conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10 the training set size of Ahmad and Fuller (IJCB 2020) for iris data, and 1/1000 the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data. We denote the new attack technique as structured random with alignment loss. Our attacks are black-box, requiring no knowledge of the weights of the target neural network, only the dimensions and values of the output vector. To show the versatility of the alignment loss, we apply our attack framework to the task of membership inference (Shokri et al., IEEE S&P 2017) on biometric data. For iris, membership inference attacks against classification networks improve from 52% to 62% accuracy.
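For readers unfamiliar with membership inference, a common simplified baseline (not the alignment-loss attack the abstract proposes) guesses that a sample was in the training set when the model's top confidence score is high; the threshold and the toy output vectors below are assumptions for illustration.

```python
# Simplified confidence-threshold baseline for membership inference:
# a sample is guessed to be a training-set member when the model's top
# softmax score exceeds a threshold. This is a generic baseline, not
# the structured-random-with-alignment-loss attack described above.

def predict_membership(confidences, threshold=0.9):
    """Label each output vector as member (True) when max confidence is high."""
    return [max(c) > threshold for c in confidences]

def attack_accuracy(confidences, is_member, threshold=0.9):
    guesses = predict_membership(confidences, threshold)
    correct = sum(g == m for g, m in zip(guesses, is_member))
    return correct / len(is_member)

# Training members tend to receive more confident predictions.
outputs = [[0.97, 0.03], [0.95, 0.05], [0.60, 0.40], [0.55, 0.45]]
labels = [True, True, False, False]
print(attack_accuracy(outputs, labels))  # 1.0 on this toy example
```

Like the paper's attack, this baseline is black-box: it needs only the output vectors, never the target network's weights.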
Fake news has existed for as long as news itself, from rumors to print media and on to broadcast television. Recently, the information age, with its communication and Internet breakthroughs, has exacerbated the spread of fake news. In addition, beyond e-commerce, the current Internet economy depends on advertisements, views, and clicks, which has prompted many developers to bait end users into clicking links or ads. Consequently, the wild spread of fake news through social media networks has impacted real-world issues, from elections to the adoption of 5G to the handling of the COVID-19 pandemic. Efforts to detect and thwart fake news, from fact-checkers to artificial-intelligence-based detectors, have existed since fake news emerged. Since fake news spreaders adopt ever more sophisticated techniques, solutions are still evolving. In this paper, R code has been used to study and visualize a modern fake news dataset. We use clustering, classification, correlation, and various plots to analyze and present the data. The experiments show the high efficiency of classifiers in separating fake news from genuine news.
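To make the classification step concrete, here is a minimal multinomial naive Bayes sketch of fake-vs-real text classification; note the paper itself works in R, and this Python version, its toy headlines, and its labels are purely illustrative.

```python
# Minimal multinomial naive Bayes for fake-vs-real classification.
# The paper uses R; this Python sketch and its toy data are illustrative.
from collections import Counter
from math import log

def train(docs):
    """docs: list of (label, tokens). Returns word counts, totals, priors, vocab."""
    counts, totals, prior = {}, Counter(), Counter()
    for label, toks in docs:
        prior[label] += 1
        counts.setdefault(label, Counter()).update(toks)
        totals[label] += len(toks)
    vocab = {w for _, toks in docs for w in toks}
    return counts, totals, prior, vocab

def predict(model, toks):
    counts, totals, prior, vocab = model
    def score(label):
        s = log(prior[label])
        for w in toks:  # Laplace smoothing avoids zero probabilities
            s += log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        return s
    return max(prior, key=score)

docs = [("fake", "shocking miracle cure".split()),
        ("fake", "you will not believe".split()),
        ("real", "senate passes budget bill".split()),
        ("real", "court rules on appeal".split())]
model = train(docs)
print(predict(model, "miracle cure believe".split()))  # fake
```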
In this era of big data, it is difficult for the current generation to find the right data among the huge amounts of data contained in online platforms. In this situation, there is a need for an information filtering system that can help users find the information they need. In recent years, a research area known as recommender systems has emerged. Recommenders have become important because they have many real-life applications. This paper reviews the different techniques and developments of recommender systems in e-commerce, e-business, e-resources, e-government, e-learning, and e-life. By analyzing recent work on the subject, we are able to give a detailed overview of current developments and identify existing difficulties in recommender systems. The final results provide practitioners and researchers with the necessary guidance and insights into recommender systems and their applications.
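As a concrete example of the core idea behind many of the systems such a survey covers, the following sketch implements user-based collaborative filtering with cosine similarity; the users, items, and ratings are made-up examples.

```python
# Minimal user-based collaborative-filtering sketch: score a target
# user's unrated items by similarity-weighted ratings from other users.
# Users, items, and ratings below are illustrative examples.
from math import sqrt

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def recommend(target, ratings, k=1):
    """Suggest up to k unrated item indices for `target`."""
    others = {u: r for u, r in ratings.items() if u != target}
    scores = {}
    for item in range(len(ratings[target])):
        if ratings[target][item] == 0:  # 0 means unrated by the target user
            scores[item] = sum(cosine(ratings[target], r) * r[item]
                               for r in others.values())
    return sorted(scores, key=scores.get, reverse=True)[:k]

ratings = {  # rows: users, columns: items
    "alice": [5, 3, 0, 1],
    "bob":   [4, 0, 4, 1],
    "carol": [1, 1, 5, 5],
}
print(recommend("alice", ratings))  # [2]: the one item alice has not rated
```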
Machine learning (ML) methods have become a useful tool for tackling vehicle routing problems, either in combination with popular heuristics or as standalone models. However, current methods generalize poorly when tackling problems of different sizes or different distributions. As a result, ML in vehicle routing has witnessed an expansion phase in which new methodologies are created for particular problem instances, methodologies that become infeasible at larger problem sizes. This paper aims to encourage consolidation of the field by understanding and improving a currently existing model, namely the attention model of Kool et al. We identify two discrepancy categories for VRP generalization. The first is based on differences inherent to the problems themselves; the second relates to architectural weaknesses that limit the model's ability to generalize. Our contribution is threefold: we first target model discrepancies by adapting the Kool et al. method and its loss function with a sparse dynamic attention based on the alpha-entmax activation. We then target the inherent differences through the use of a mixed-instance training method, which has been shown to outperform single-instance training in certain scenarios. Finally, we introduce a framework for inference-level data augmentation that improves performance by exploiting the model's lack of invariance to rotation and dilation changes.
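The inference-level augmentation idea can be sketched as follows: rotating and dilating an instance's customer coordinates yields equivalent routing problems, and the solver can be run on each variant with the best tour kept. The specific angles and scale factors below are arbitrary examples, not the paper's settings.

```python
# Sketch of inference-level data augmentation for a routing instance:
# rotations and dilations of the coordinates produce equivalent
# problems, exploiting the model's lack of invariance to these
# transforms. Angles and scale factors are illustrative assumptions.
from math import cos, sin, pi

def rotate(points, theta):
    return [(x * cos(theta) - y * sin(theta),
             x * sin(theta) + y * cos(theta)) for x, y in points]

def dilate(points, factor):
    return [(x * factor, y * factor) for x, y in points]

def augmentations(points):
    """Yield transformed copies of the instance for inference-time selection."""
    for theta in (0, pi / 2, pi, 3 * pi / 2):
        for factor in (1.0, 0.5):
            yield dilate(rotate(points, theta), factor)

instance = [(0.2, 0.8), (0.5, 0.5), (0.9, 0.1)]  # toy customer coordinates
variants = list(augmentations(instance))
print(len(variants))  # 8 transformed instances
```

At inference time, each variant would be fed to the trained model and the shortest resulting tour retained.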
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related to but are not classical RL algorithms. The role of simulators in training agents, as well as methods to validate, test, and robustify existing solutions in RL, is discussed.
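The classical RL loop that DRL algorithms build on can be shown with a tiny tabular Q-learning example: an agent in a one-dimensional corridor learns to walk right to a goal state. The environment and hyperparameters are illustrative assumptions, not drawn from the review.

```python
# Tiny tabular Q-learning sketch of the classical RL loop. The corridor
# environment and all hyperparameters are illustrative assumptions.
import random

random.seed(0)
N_STATES, ACTIONS = 5, (0, 1)          # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3  # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Move in the corridor; reward 1 only on reaching the rightmost state."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

for _ in range(200):
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:                       # explore
            action = random.choice(ACTIONS)
        else:                                               # exploit
            action = max(ACTIONS, key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # One-step temporal-difference (Q-learning) update.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)  # greedy action per state; moving right is optimal in states 0-3
```

DRL replaces the table `Q` with a neural network, but the update target (reward plus discounted value of the next state) is the same.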
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely monitor the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
This paper presents our solutions for the MediaEval 2022 task on DisasterMM. The task is composed of two subtasks, namely (i) Relevance Classification of Twitter Posts (RCTP), and (ii) Location Extraction from Twitter Texts (LETT). The RCTP subtask aims at differentiating flood-related and non-relevant social posts, while LETT is a Named Entity Recognition (NER) task that aims at the extraction of location information from the text. For RCTP, we proposed four different solutions based on BERT, RoBERTa, DistilBERT, and ALBERT, obtaining F1-scores of 0.7934, 0.7970, 0.7613, and 0.7924, respectively. For LETT, we used three models, namely BERT, RoBERTa, and DistilBERT, obtaining F1-scores of 0.6256, 0.6744, and 0.6723, respectively.
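The F1-scores reported above are the harmonic mean of precision and recall over the positive (flood-relevant) class; the sketch below shows the computation on made-up predictions.

```python
# How a binary F1-score such as those reported above is computed:
# harmonic mean of precision and recall over the positive class.
# The example labels and predictions are made up for illustration.

def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # 1 = flood-relevant post
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
print(round(f1_score(y_true, y_pred), 4))  # 0.8 (precision 0.8, recall 0.8)
```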
In recent years, social media has been widely explored as a potential source of communication and information in disasters and emergency situations. Several interesting works and case studies of disaster analytics exploring different aspects of natural disasters have already been conducted. Along with the great potential, disaster analytics comes with several challenges, mainly due to the nature of social media content. In this paper, we explore one such challenge and propose a text classification framework to deal with noisy Twitter data. More specifically, we employed several transformers, both individually and in combination, to differentiate between relevant and non-relevant Twitter posts, achieving a highest F1-score of 0.87.
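One common way to use several classifiers "in combination" is soft voting, averaging the per-class probabilities each model outputs; the probabilities below are illustrative, not taken from the paper.

```python
# Soft-voting sketch: average per-class probabilities across models and
# pick the argmax class. Model outputs below are illustrative examples.

def soft_vote(prob_lists):
    """Average class probabilities across models; return (argmax, averages)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Three hypothetical models scoring one tweet: [non-relevant, relevant]
model_outputs = [[0.30, 0.70], [0.45, 0.55], [0.20, 0.80]]
label, avg = soft_vote(model_outputs)
print(label)  # 1: the ensemble labels the tweet relevant
```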